
Possibility Distribution


Attribute Fusion-based Classifier on Framework of Belief Structure

Hu, Qiying, Liang, Yingying, Zhou, Qianli, Pedrycz, Witold

arXiv.org Artificial Intelligence

Abstract: Dempster-Shafer Theory (DST) provides a powerful framework for modeling uncertainty and has been widely applied to multi-attribute classification tasks. However, traditional DST-based attribute fusion classifiers suffer from oversimplified membership-function modeling and limited exploitation of the belief structure carried by the basic probability assignment (BPA), reducing their effectiveness in complex real-world scenarios. This paper presents an enhanced attribute fusion-based classifier that addresses these limitations through two key innovations. First, we adopt a selective modeling strategy that uses both single Gaussians and Gaussian Mixture Models (GMMs) for membership-function construction, with model selection guided by cross-validation and a tailored evaluation metric. Second, we introduce a novel method for transforming a possibility distribution into a BPA by combining simple BPAs derived from normalized possibility distributions, enabling a much richer and more flexible representation of uncertain information. Furthermore, we apply the belief structure-based BPA generation method to the evidential K-Nearest Neighbors (EKNN) classifier, enhancing its ability to incorporate uncertainty information into decision-making. Comprehensive experiments on benchmark datasets compare the proposed attribute fusion-based classifier and the enhanced EKNN classifier with both evidential classifiers and conventional machine learning classifiers. The results show that the proposed classifier outperforms the best existing evidential classifier, achieving an average accuracy improvement of 4.86% while maintaining low variance, confirming its effectiveness and robustness.
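Transformations from possibility distributions to BPAs typically build on the classical consonant construction, in which sorted possibility values induce nested focal elements. The sketch below illustrates that standard baseline (not the paper's specific combination of simple BPAs); the function name and dict-based encoding are illustrative choices.

```python
def consonant_bpa(poss):
    """Classical consonant BPA from a normalized possibility distribution.

    poss: dict mapping each singleton label to a possibility degree in [0, 1],
    with max value 1 (normalization). Returns a dict mapping frozenset focal
    elements to masses; focal elements are nested by construction.
    """
    # Sort labels by decreasing possibility to form the nested focal sets.
    labels = sorted(poss, key=poss.get, reverse=True)
    values = [poss[lab] for lab in labels] + [0.0]
    bpa = {}
    for i in range(len(labels)):
        # Each mass is the drop between consecutive possibility levels.
        mass = values[i] - values[i + 1]
        if mass > 0:
            bpa[frozenset(labels[: i + 1])] = mass
    return bpa

bpa = consonant_bpa({"a": 1.0, "b": 0.6, "c": 0.2})
# {'a'}: 0.4, {'a','b'}: 0.4, {'a','b','c'}: 0.2
```

The resulting masses always sum to 1 because the possibility levels telescope down from the normalized maximum of 1 to 0.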


Perception-Informed Neural Networks: Beyond Physics-Informed Neural Networks

Mazandarani, Mehran, Najariyan, Marzieh

arXiv.org Artificial Intelligence

This article introduces Perception-Informed Neural Networks (PrINNs), a framework designed to incorporate perception-based information into neural networks, addressing systems whose physics laws or differential equations may be known or unknown. PrINNs extend the concept of Physics-Informed Neural Networks (PINNs) and their variants, offering a platform for integrating diverse forms of perception precisiation, including singular values, probability distributions, possibility distributions, intervals, and fuzzy graphs. PrINNs allow neural networks to model dynamical systems by integrating expert knowledge and perception-based information through loss functions, enabling the creation of modern data-driven models. Key contributions include Mixture of Experts Informed Neural Networks (MOEINNs), which combine heterogeneous expert knowledge within the network, and Transformed-Knowledge Informed Neural Networks (TKINNs), which facilitate the incorporation of meta-information for enhanced model performance. Additionally, Fuzzy-Informed Neural Networks (FINNs), a modern class of fuzzy deep neural networks, leverage fuzzy logic constraints within a deep learning architecture, allowing online training without pre-training and eliminating the need for defuzzification. PrINNs represent a significant step toward bridging the gap between traditional physics-based modeling and modern data-driven approaches, enabling neural networks to learn from both structured physics laws and flexible perception-based rules. This approach empowers neural networks to operate in uncertain environments, model complex systems, and discover new forms of differential equations, making PrINNs a powerful tool for advancing computational science and engineering.


Isopignistic Canonical Decomposition via Belief Evolution Network

Zhou, Qianli, Zhan, Tianxiang, Deng, Yong

arXiv.org Artificial Intelligence

Developing a general information processing model in uncertain environments is fundamental for the advancement of explainable artificial intelligence. Dempster-Shafer theory of evidence is a well-known and effective reasoning method for representing epistemic uncertainty, and it is closely related to subjective probability theory and possibility theory. Although these formalisms can be transformed into each other under particular belief structures, there remains a lack of a clear and interpretable transformation process, as well as a unified approach for information processing. In this paper, we aim to address these issues from the perspectives of isopignistic belief functions and the hyper-cautious transferable belief model. First, we propose an isopignistic transformation based on the belief evolution network. This transformation allows the information granule to be adjusted while retaining the potential decision outcome. The isopignistic transformation is integrated with a hyper-cautious transferable belief model to establish a new canonical decomposition. This decomposition offers a reverse path between a possibility distribution and its isopignistic mass functions. The result of the canonical decomposition, called the isopignistic function, is an identical information content distribution that reflects the propensity and relative commitment degree of the BPA. Furthermore, this paper introduces a method to reconstruct the basic belief assignment by adjusting the isopignistic function, and explores the advantages of this approach for modeling and handling uncertainty within the hyper-cautious transferable belief model. More generally, this paper establishes a theoretical basis for building general models of artificial intelligence based on probability theory, Dempster-Shafer theory, and possibility theory.
From the introduction: Dempster-Shafer (DS) theory of evidence, also known as belief function theory, is an effective artificial intelligence tool for modeling and handling uncertainty in partial-knowledge environments.
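Isopignistic mass functions are those that share the same pignistic probability. The standard pignistic (BetP) transformation, whose level sets the paper's decomposition works with, distributes each focal element's mass evenly over its singletons. A minimal sketch, with an illustrative frozenset-based encoding:

```python
def pignistic(bpa):
    """Pignistic (BetP) transformation of a BPA.

    bpa: dict mapping frozenset focal elements to masses summing to 1.
    Returns a dict mapping each singleton to its pignistic probability.
    """
    betp = {}
    for focal, mass in bpa.items():
        # Spread the mass of each focal element evenly over its members.
        share = mass / len(focal)
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

betp = pignistic({frozenset({"a"}): 0.4, frozenset({"a", "b"}): 0.6})
# a: 0.7, b: 0.3
```

Any BPA whose BetP matches this output is isopignistic with the input; the paper's contribution is an interpretable way to navigate between such BPAs.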


Contextual Mixture of Experts: Integrating Knowledge into Predictive Modeling

Souza, Francisco, Offermans, Tim, Barendse, Ruud, Postma, Geert, Jansen, Jeroen

arXiv.org Artificial Intelligence

This work proposes a new data-driven model devised to integrate process knowledge into its structure to increase human-machine synergy in the process industry. The proposed Contextual Mixture of Experts (cMoE) explicitly uses process knowledge during the model learning stage, shaping the historical data through possibility distributions so that it reflects operators' context of the process. The model was evaluated in two real case studies for quality prediction: a sulfur recovery unit and a polymerization process. The contextual mixture of experts was employed to represent different contexts in both experiments. The results indicate that integrating process knowledge increased predictive performance while improving interpretability by providing insights into the variables affecting the process's different regimes.


An Evidential Neural Network Model for Regression Based on Random Fuzzy Numbers

Denoeux, Thierry

arXiv.org Artificial Intelligence

We introduce a distance-based neural network model for regression, in which prediction uncertainty is quantified by a belief function on the real line. The model interprets the distances of the input vector to prototypes as pieces of evidence represented by Gaussian random fuzzy numbers (GRFNs) and combined by the generalized product-intersection rule, an operator that extends Dempster's rule to random fuzzy sets. The network output is a GRFN that can be summarized by three numbers characterizing the most plausible predicted value, the variability around this value, and the epistemic uncertainty. Experiments with real datasets demonstrate the very good performance of the method compared with state-of-the-art evidential and statistical learning algorithms.


Benferhat

AAAI Conferences

Interval-based possibilistic logic is a flexible setting extending standard possibilistic logic such that each logical expression is associated with a sub-interval of [0,1]. This paper focuses on the fundamental issue of conditioning in the interval-based possibilistic setting. The first part of the paper proposes a set of natural properties that an interval-based conditioning operator should satisfy. We then give a natural and safe definition for conditioning an interval-based possibility distribution. This definition is based on applying standard min-based or product-based conditioning to the set of all associated compatible possibility distributions. We analyze the obtained posterior distributions and provide a precise characterization of the lower and upper endpoints of the intervals associated with interpretations. The second part of the paper provides an equivalent syntactic computation of interval-based conditioning when interval-based distributions are compactly encoded by means of interval-based possibilistic knowledge bases. We show that interval-based conditioning is achieved without extra computational cost compared to conditioning standard possibilistic knowledge bases.
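For a single point-valued possibility distribution, the standard min-based conditioning that the interval-based definition applies to each compatible distribution can be sketched as follows. The function name and dict-based encoding are illustrative; the interval-based operator itself (which ranges over all compatible distributions) is not reproduced here.

```python
def condition_min(poss, phi):
    """Standard min-based possibilistic conditioning.

    poss: dict mapping each world to its possibility degree in [0, 1].
    phi: set of worlds satisfying the evidence.
    Worlds outside phi become impossible; the most plausible models of phi
    are promoted to possibility 1; other models of phi keep their degree.
    """
    # Possibility of the evidence: the max degree over its models.
    pi_phi = max((p for w, p in poss.items() if w in phi), default=0.0)
    cond = {}
    for w, p in poss.items():
        if w not in phi:
            cond[w] = 0.0
        elif p == pi_phi:
            cond[w] = 1.0
        else:
            cond[w] = p
    return cond

cond = condition_min({"w1": 1.0, "w2": 0.7, "w3": 0.4}, {"w2", "w3"})
# w1: 0.0, w2: 1.0, w3: 0.4
```

Note that the posterior is again normalized (some world has degree 1), which is the defining property of min-based conditioning.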


Simplified Kripke semantics for K45-like Gödel modal logics and its axiomatic extensions

Rodriguez, Ricardo, Tuyt, Olim, Godo, Lluis, Esteva, Francesc

arXiv.org Artificial Intelligence

In this paper, we provide simplified semantics for the logic K45(G), i.e. the many-valued Gödel counterpart of the classical modal logic K45. More precisely, we characterize K45(G) as the set of valid formulae of the class of possibilistic Gödel Kripke frames ⟨W, π⟩, where W is a non-empty set of worlds and π : W → [0, 1] is a possibility distribution on W.


Conditional Generative Adversarial Networks for Optimal Path Planning

Ma, Nachuan, Wang, Jiankun, Meng, Max Q. -H.

arXiv.org Artificial Intelligence

Path planning plays an important role in autonomous robot systems. Effective understanding of the surrounding environment and efficient generation of an optimal collision-free path are both critical parts of solving the path planning problem. Although conventional sampling-based algorithms, such as the rapidly-exploring random tree (RRT) and its improved optimal version (RRT*), have been widely used in path planning because of their ability to find a feasible path even in complex environments, they fail to find an optimal path efficiently. To solve this problem and satisfy the two aforementioned requirements, we propose a novel learning-based path planning algorithm that consists of a novel generative model based on conditional generative adversarial networks (CGAN) and a modified RRT* algorithm (denoted CGAN-RRT*). Given the map information, our CGAN model can efficiently generate a possibility distribution of feasible paths, which the CGAN-RRT* algorithm uses to find the optimal path with a non-uniform sampling strategy. The CGAN model is trained on ground truth maps, each of which is generated by aggregating the results of executing the RRT algorithm 50 times on one raw map. We demonstrate the efficient performance of this CGAN model by testing it on two groups of maps and comparing the CGAN-RRT* algorithm with the conventional RRT* algorithm.


SPOCC: Scalable POssibilistic Classifier Combination -- toward robust aggregation of classifiers

Albardan, Mahmoud, Klein, John, Colot, Olivier

arXiv.org Machine Learning

When several predictors have been trained to solve the same classification task, a second level of algorithmic procedure is necessary to reconcile the classifier predictions and deliver a single one. Such a procedure is known as classifier combination, fusion, or aggregation. When each individual classifier is trained using the same training algorithm (but under different circumstances), the aggregation procedure is referred to as an ensemble method. When the classifiers may be generated by different training algorithms, the aggregation procedure is referred to as a multiple classifier system. In both cases, the set of individual classifiers is called a classifier ensemble. Classifier combination comes either from a choice of the programmer or is imposed by context. In the first case, combination is meant to increase classification performance by either increasing the learning capacity or mitigating …
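As a concrete baseline for the aggregation step described above, the simplest classifier combination rule is a majority vote over the individual predictions. This sketch is a generic illustration of that baseline, not the possibilistic SPOCC method itself:

```python
from collections import Counter

def combine_majority(predictions):
    """Simplest classifier combination: majority vote.

    predictions: list of class labels, one per individual classifier.
    Returns the most frequent label (ties broken by first occurrence).
    """
    votes = Counter(predictions)
    return votes.most_common(1)[0][0]

combine_majority(["cat", "dog", "cat"])  # 'cat'
```

More refined combination schemes, including possibilistic ones, replace the hard vote counts with graded degrees of support per class.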


A Simple View of the Dempster-Shafer Theory of Evidence and its Implication for the Rule of Combination

AI Magazine

The emergence of expert systems as one of the major areas of activity within AI has resulted in a rapid growth of interest within the AI community in issues relating to the management of uncertainty and evidential reasoning. During the past two years, in particular, the Dempster-Shafer theory of evidence has attracted considerable attention as a promising method of dealing with some of the basic problems arising in the combination of evidence and data fusion. Developing an adequate understanding of this theory requires considerable effort and a good background in probability theory. There is, however, a simple way of approaching the Dempster-Shafer theory that requires only a minimal familiarity with relational models of data. For someone with a background in AI or database management, this approach has the advantage of relating in a natural way to the familiar framework of AI and databases.
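The rule of combination discussed above, Dempster's rule, conjunctively combines two BPAs and renormalizes by the mass that falls on the empty set (the conflict). A minimal sketch, with an illustrative frozenset-based encoding:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two BPAs.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Returns the normalized conjunctive combination.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            # Mass falling on the empty set is the conflict between sources.
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence; rule undefined.")
    # Renormalize the surviving masses by the non-conflicting fraction.
    return {f: m / (1.0 - conflict) for f, m in combined.items()}

m = dempster_combine(
    {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4},
    {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5},
)
# {'a'}: 3/7, {'b'}: 2/7, {'a','b'}: 2/7
```

The normalization step is precisely what the article's relational-model view re-expresses in terms familiar from databases.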